CALCULATION OF PACKAGE WALL DENSITY IN THE COMMERCIAL TRAILER LOADING INDUSTRY
Patent abstract:
A method and apparatus for utilizing a three-dimensional (3D) depth imaging system for use in commercial trailer loading is disclosed. The method and apparatus can be configured to determine a loading efficiency score for a trailer in various ways. In one embodiment, the method and apparatus may determine the score by receiving a point cloud data set based on 3D image data, analyzing the point cloud data set, generating a set of data slices based on the point cloud data set, each data slice corresponding to a portion of the 3D image data, estimating a set of missing data points in each data slice in the set of data slices, and calculating a loading efficiency score based on the generated set of data slices and the estimated set of missing data points. Figure for the abstract: Fig. 1.

Publication number: FR3076643A1
Application number: FR1873874
Filing date: 2018-12-21
Publication date: 2019-07-12
Inventors: Adithya H. Krishnamurthy; Justin F. Barish; Miroslav Trajkovic
Applicant: Symbol Technologies LLC
Patent description:
Description

Title of the invention: CALCULATION OF PACKAGE WALL DENSITY IN THE COMMERCIAL TRAILER LOADING INDUSTRY

BACKGROUND OF THE INVENTION

In the trailer loading industry, there are many approaches to assessing whether a trailer has been loaded efficiently, and different companies use various metrics for trailer loading efficiency. One way to measure efficiency is to use a three-dimensional camera combined with algorithms to detect the fill level of a trailer in real time. A trailer's fill level is often calculated with reference to the "walls" that are built inside the trailer from the materials stored in it. However, this approach fails to determine where a trailer is filled inefficiently, that is, it fails to determine where gaps appear in the filling of the trailer. Therefore, there is a need for a way to calculate the density of package walls in the commercial trailer loading industry.

BRIEF DESCRIPTION OF THE VARIOUS VIEWS OF THE DRAWINGS

The attached figures, where identical reference numbers refer to identical or functionally similar elements in all the separate views, together with the detailed description below, are incorporated into and form part of the specification, and further serve to illustrate embodiments of concepts which include the claimed invention, and to explain various principles and advantages of these embodiments.

[Fig.1] Figure 1 is a perspective view, as viewed from above, of a loading dock comprising a loading facility, a plurality of transfer bays, a plurality of vehicles, and a plurality of vehicle storage areas, in accordance with exemplary embodiments presented in this document.

[0005] [Fig.2A] Figure 2A is a perspective view of the loading facility of Figure 1 showing a vehicle storage area docked at a transfer bay, in accordance with the exemplary embodiments presented in this document.

[0006] [Fig.2B] Figure 2B is a perspective view of a trailer monitoring unit (TMU) of Figure 2A, in accordance with exemplary embodiments presented in this document.

[0007] [Fig.3] Figure 3 is a block diagram representative of an embodiment of a server associated with the loading facility of Figure 2A and the TMU of Figure 2B.

[0008] [Fig.4] Figure 4 is a set of images detailing the interior of a commercial trailer.

[0009] [Fig.5] Figure 5 is a flow diagram of a method for calculating a wall density for use in loading commercial trailers.

Those skilled in the art will appreciate that the elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of the embodiments of the present invention.

The apparatus and process components have been represented, where appropriate, by conventional symbols in the drawings, showing only the specific details that are relevant to understanding the embodiments of the present invention, so as not to obscure the description with details that will be readily apparent to those skilled in the art having the benefit of the description presented here.

Description of Embodiments

[0012] Methods and systems for a three-dimensional (3D) depth imaging system for use in loading commercial trailers are presented in this document.
In some embodiments, the 3D depth imaging system may include various components, such as, but not limited to, a 3D depth camera configured to capture 3D image data, and a loading efficiency application (app) running on one or more processors. In some cases, the 3D depth camera can be oriented in one direction to capture 3D image data from a storage area associated with a vehicle. The storage area can be a trailer used to transport goods. The loading efficiency application can be configured to determine, based on the 3D image data, a loading efficiency score for the storage area. The loading efficiency application can determine the loading efficiency by receiving a point cloud data set based on the 3D image data, analyzing the point cloud data set, generating a set of data slices based on the point cloud data set, each data slice corresponding to a part of the 3D image data, estimating a set of missing data points in each data slice in the set of data slices, and calculating a loading efficiency score based on the generated set of data slices and the estimated set of missing data points. In some cases, the system may perform additional steps, fewer steps, or different steps.

Stated differently, the description relates to detecting the fill level of a set of package walls as measured from floor to ceiling of a trailer, allowing calculation of the efficiency with which the trailer was loaded, both overall and by region. Generally, trailers are loaded so that walls are constructed from floor to ceiling from the materials loaded into the trailer. Sections of these walls can be considered regions. Thus, for example, a package wall in a trailer could be divided into five equal regions, each representing twenty percent of the vertical space of the wall.

One way to measure the fill level of a trailer is to measure the distance from the door of the trailer to the most distant package wall. This measurement can be plotted against time while the trailer is loaded. This measure does not provide a complete picture of how efficiently the trailer is filled, since it does not divide the trailer load into a series of package walls, but rather is a chronological view of the linear distance to the package wall. A more accurate way to calculate the loading efficiency of a trailer is to measure the loading wall by wall as the package walls are built, and to calculate the wall quality, which is a measure of the package density of the wall. To achieve this, the air spaces in each package wall can be calculated. In one example, the package wall is divided into analyzable parts, such as thin slices, and the three-dimensional (3D) composition of each slice is analyzed. Because the data is analyzed in slices, informed decisions can be made regarding missing data and noise, giving accurate and stable airspace detection.
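As a rough illustration of this slice-based analysis, the following Python sketch bins a wall's point cloud into thin horizontal slices and scores the wall by the fraction of occupied grid cells. All names, dimensions, and the grid resolution are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def wall_density_score(points, wall_width=2.4, wall_height=2.6,
                       slice_height=0.05, grid_step=0.02):
    """Score one package wall by the fraction of occupied grid cells.

    points: (N, 3) array of (x, y, z) points on the wall plane,
            with x = horizontal position and y = height, in meters.
    """
    n_slices = int(np.ceil(wall_height / slice_height))
    cells_x = int(wall_width / grid_step)
    cells_y = max(1, int(slice_height / grid_step))
    filled, expected = 0, n_slices * cells_x * cells_y
    for i in range(n_slices):
        lo, hi = i * slice_height, (i + 1) * slice_height
        sl = points[(points[:, 1] >= lo) & (points[:, 1] < hi)]
        if len(sl) == 0:
            continue  # a fully empty slice contributes only missing cells
        # Occupied (x, y) grid cells in this slice; empty cells stand in
        # for the "missing data points" (air spaces) described in the text.
        cells = np.unique((sl[:, :2] / grid_step).astype(int), axis=0)
        filled += len(cells)
    return filled / expected  # 1.0 would be a perfectly dense wall
```

Under these assumptions, calling `wall_density_score(cloud)` on points already segmented to a single wall yields a density in [0, 1]; per-region scores could be obtained by calling it on each region's points.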
FIG. 1 is a perspective view, as seen from above, of a loading dock 100 comprising a loading facility 101, a plurality of transfer bays 102d to 110d, a plurality of vehicles 106v and 110v, and a plurality of vehicle storage areas 102s to 110s, in accordance with exemplary embodiments presented in this document. In some embodiments, the loading dock 100 may, for example, be associated with a retail store, a wholesale store, or other such commercial building. In other embodiments, the loading dock 100 may be associated with a storage facility, or a waypoint facility, for receiving packages, boxes, or other transportable objects or goods generally involved in the distribution and logistics of such transportable objects or goods. Additional embodiments are envisaged here such that the loading dock 100 allows the loading and unloading of transportable objects or goods at a store, a facility, or another similar location of this kind.

For example, Figure 1 shows a loading facility 101, which, as described, can be a retail store, a storage facility, or another similar location that allows the loading and unloading of transportable objects or goods. The loading facility 101 comprises a plurality of transfer bays 102d to 110d. For example, a transfer bay 104d is shown as unoccupied, and comprises an opening of a size identical or similar to that of an opening of a vehicle storage area. As shown in FIG. 1, the transfer bay 104d may further comprise padding or insulation to receive a trailer (for example, a vehicle storage area) against the wall of the loading facility 101. The transfer bay 104d may further include a retractable door positioned in the opening of the transfer bay 104d, where the door can be opened to provide access to the vehicle storage area of a trailer from the loading facility 101. As described in this document, the transfer bay 104d is representative of the remaining depicted transfer bays, such as the transfer bays 102d, 106d, 108d and 110d, where the transfer bays 102d, 106d, 108d and 110d may have characteristics or functionality similar to those described in this document for the transfer bay 104d.

In various embodiments, an opening of a vehicle storage area can be the opening of a trailer, where the trailer can be towed by a semi-trailer truck, a truck, or another similar vehicle capable of hitching and moving a trailer (for example, a vehicle storage area), as described in this document. In some embodiments, the floor of a trailer, when docked, may be on the same level, or about the same level, as the floor of a transfer bay (for example, the transfer bays 102d to 110d) of the loading facility 101.

Figure 1 also shows a plurality of vehicle storage areas 102s, 106s and 110s. The vehicle storage areas 102s, 106s and 110s may each be a storage area associated with a vehicle, for example, a trailer or other transportable storage area (for example, 102s, 106s and 110s) associated with a semi-trailer truck, truck, or other such large vehicle (for example, 106v and 110v) as described in this document. For example, as shown in Figure 1, vehicles 106v and 110v are each associated with vehicle storage areas 106s and 110s respectively. The vehicles 106v and 110v can each be responsible for maneuvering their respective vehicle storage areas 106s and 110s to the respective transfer bays, such as the transfer bays 106d and 110d. As described in this document, the vehicle storage areas 102s, 106s and 110s each include an opening, generally at one end, which is of a size similar or identical to the openings of the transfer bays 102d to 110d. In this way, the vehicle storage areas 102s, 106s and 110s can interface with, or occupy, the transfer bays 102d to 110d in order to allow the loading and unloading of packages, boxes, or other transportable objects or goods as described in this document.
For example, as shown in Figure 1, the vehicle storage area 102s is shown as a trailer occupying the transfer bay 102d. Therefore, the opening of the vehicle storage area 102s interfaces with the opening of the transfer bay 102d so that the interior of the vehicle storage area 102s can be seen or accessed from the transfer bay 102d. Similarly, the vehicle storage area 110s is also shown as a trailer occupying the transfer bay 110d, where the opening of the vehicle storage area 110s interfaces with the opening of the transfer bay 110d so that the interior of the vehicle storage area 110s can be seen or accessed from the transfer bay 110d. The vehicle storage area 106s is shown as not currently occupying the transfer bay 106d.

Vehicle storage areas, such as 102s, 106s and 110s, can have different sizes, lengths, or other dimensions. For example, in one embodiment, the vehicle storage area 102s can be associated with a 63-foot-long trailer, the vehicle storage area 106s can be associated with a 53-foot-long trailer, and the vehicle storage area 110s can be associated with a 73-foot-long trailer. Other variations of vehicle storage area dimensions, sizes, and/or lengths are envisaged in this document.

FIG. 2A is a perspective view 200 of the loading facility 101 of FIG. 1 showing a vehicle storage area 102s docked at a transfer bay 102d, in accordance with exemplary embodiments presented in this document. For example, Figure 2A shows the vehicle storage area 102s, which, in the embodiment of Figure 2A, is an interior view of the vehicle storage area 102s of Figure 1. Figure 2A also shows the transfer bay 102d, which, in the embodiment of Figure 2A, is an interior view of the transfer bay 102d of Figure 1. As shown in Figure 2A, the vehicle storage area 102s occupies the transfer bay 102d, exposing the interior of the vehicle storage area 102s to the interior of the loading facility 101. The vehicle storage area 102s includes packages, boxes, and/or other transportable objects or goods, comprising packages 208p1 to 208p3, which may, in certain embodiments, correspond to package walls, as described in this document. The packages 208p1 to 208p3 may be in a state of being loaded into or unloaded from the vehicle storage area 102s. For example, worker 212 may be in a state of loading or unloading additional packages 210 into or out of the vehicle storage area 102s. In some embodiments, the manager 206 can additionally monitor, assist, or otherwise facilitate the loading or unloading of packages, boxes, and/or other transportable objects or goods (for example, packages 208p1 to 208p3 or 210) into or out of the vehicle storage area 102s. For example, the manager 206 can use a dashboard application running on a client device 204 as described in this document.

Figure 2A also shows a trailer monitoring unit (TMU) 202. The TMU 202 can be a mountable device which includes a 3D depth camera for capturing 3D images (for example, 3D image data) and a photorealistic camera (for example, 2D image data). The photorealistic camera can be an RGB (red, green, blue) camera for capturing 2D images. The TMU 202 may also include one or more processors and one or more computer memories for storing image data, and/or for executing applications which perform analytics or other functions as described in this document.
In various embodiments, and as shown in FIG. 2A, the TMU 202 can be mounted in the loading facility 101 and oriented in the direction of the vehicle storage area 102s to capture 3D and/or 2D image data of the interior of the vehicle storage area 102s. For example, as shown in FIG. 2A, the TMU 202 can be oriented so that the 3D and 2D cameras of the TMU 202 look down the length of the vehicle storage area 102s so that the TMU 202 can scan or detect the walls, floor, ceiling, packages (for example, 208p1 to 208p3 or 210), or other objects or surfaces within the vehicle storage area 102s to determine the 3D and 2D image data. The image data can be processed by said one or more processors and/or memories of the TMU 202 (or, in certain embodiments, by one or more processors and/or memories of a remote server) to carry out analytics functions, such as graphical or imaging analysis, as described by the various flowcharts, block diagrams, methods, functions, or various embodiments presented in this document.

In some embodiments, for example, the TMU 202 can process the 3D and 2D image data, as scanned or detected by the 3D depth camera and the photorealistic camera, for use by other devices (for example, the client device 204 or the server 301, as described later in this document). For example, said one or more processors and/or said one or more memories of the TMU 202 can process the image data scanned or detected from the vehicle storage area 102s. The processing of the image data can generate post-scan data which may include metadata, simplified data, normalized data, result data, status data, or alert data as determined from the original scanned or detected image data. In some embodiments, the image data and/or the post-scan data can be sent to a client application, such as a dashboard application (app) described in this document, for viewing, manipulation, or some other interaction. In other embodiments, the image data and/or the post-scan data may be sent to a server (for example, the server 301 as described later in this document) for storage or for further manipulation.

As shown in Figure 2A, the image data and/or the post-scan data can be received on the client device 204. The client device 204 can implement a dashboard application to receive the image data and/or the post-scan data and display this data, for example, in graphical or other format, for the manager 206 to facilitate the unloading or loading of packages (for example, 208p1 to 208p3 or 210), as described in this document. In some embodiments, the dashboard application can be implemented through a web platform such as Java J2EE (for example, JavaServer Faces) or Ruby on Rails. In these embodiments, the web platform can generate or update a dashboard application user interface by generating a dynamic web page (for example, using HTML, CSS, JavaScript) or via a client-facing mobile application (for example, via Java for a Google Android-based application or Objective-C/Swift for an Apple iOS-based application), where the user interface is displayed through the dashboard application on the client device, for example, client device 204. In some embodiments, the dashboard application can receive the image data and/or the post-scan data and display this data in real time. The client device 204 can be a mobile device, such as a tablet, smartphone, laptop, or other such mobile computing device.
The client device 204 may implement an operating system or platform to run dashboard (or other) applications or functionality, including, for example, any of Apple's iOS platform, Google's Android platform, and/or Microsoft's Windows platform. The client device 204 may include one or more processors and/or one or more memories implementing the dashboard application or providing other similar functionality. The client device 204 may also include wired or wireless transceivers for receiving image data and/or post-scan data as described herein. These wired or wireless transceivers can implement one or more communication protocol standards including, for example, TCP/IP, Wi-Fi (802.11b), Bluetooth, or any other similar communication protocols or standards.

In some embodiments, the image data and/or the post-scan data can be sent to a server, such as the server 301 described in this document. In these embodiments, the server can generate post-scan data which may include metadata, simplified data, normalized data, result data, status data, or alert data as determined from the original scanned or detected image data provided by the TMU 202. As described in this document, the server or the central office can store this data, and can also send the image data and/or the post-scan data to a dashboard application, or to another application, implemented on a client device, such as the dashboard application implemented on the client device 204 of FIG. 2A.

Figure 2B is a perspective view of the TMU 202 of Figure 2A, in accordance with exemplary embodiments presented in this document. In the exemplary embodiment of Figure 2B, the TMU 202 may include a mounting bracket 252 for orienting or otherwise positioning the TMU 202 in the loading facility 101 as described in this document. The TMU 202 may further include one or more processors and one or more memories for processing the image data as described in this document. For example, the TMU 202 may include a flash memory used to determine, store, or otherwise process the image data and/or the post-scan data.

The TMU 202 can include a 3D depth camera 254 to capture, detect, or scan the 3D image data. For example, in some embodiments, the 3D depth camera 254 may include an infrared (IR) projector and an associated IR camera. In these embodiments, the IR projector projects a pattern of IR light or beams onto an object or surface, which, in various embodiments presented herein, may include surfaces of a vehicle storage area (for example, the vehicle storage area 102s) or objects in the vehicle storage area, such as boxes or packages (for example, packages 208p1 to 208p3 or 210). The IR light or beams can be distributed over the object or surface in a dot pattern by the IR projector, and the dot pattern can be detected or scanned by the IR camera. A depth detection application, such as a depth detection application running on said one or more processors or memories of the TMU 202, can determine, based on the dot pattern, various depth values, for example, the depth values of the vehicle storage area 102s. For example, a near-depth object (for example, nearby boxes, packages, etc.) can be determined where the dots are dense, and distant-depth objects (for example, distant boxes, packages, etc.) can be determined where the dots are more dispersed. The various depth values can be used by the depth detection application and/or the TMU 202 to generate a depth map.
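For context, a depth map of this kind is conventionally back-projected into the kind of (x, y, z) point cloud discussed later using the standard pinhole camera model. The sketch below shows that conversion; the intrinsic parameters (fx, fy, cx, cy) are placeholder values, not those of the TMU 202.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    """depth: (H, W) array of depth values in meters; returns (N, 3) points."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    v, u = np.indices((h, w))          # pixel row (v) and column (u) grids
    z = depth
    valid = z > 0                      # zero depth = no IR return at that pixel
    x = (u - cx) * z / fx              # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```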
The depth map may represent a 3D image, or contain 3D image data, of objects or surfaces that have been detected or scanned by the 3D depth camera 254, for example, the vehicle storage area 102s and any objects or surfaces within it.

The TMU 202 may further include a photorealistic camera 256 for capturing, detecting, or scanning 2D image data. The photorealistic camera 256 can be an RGB (red, green, blue) camera for capturing 2D images having RGB pixel data. In some embodiments, the photorealistic camera 256 can capture 2D images, and associated 2D image data, at the same or a similar time as the 3D depth camera 254, so that the TMU 202 can have a set of 3D image data and a set of 2D image data for a particular surface, object, or scene at the same or a similar time.

Figure 3 is a block diagram representative of an embodiment of a server associated with the loading facility 101 of Figure 2A. In some embodiments, the server 301 may be located in the same facility as the loading facility 101. In other embodiments, the server 301 may be located in a remote location, such as on a cloud platform or other remote location. In another embodiment, the server 301 can be communicatively coupled to a 3D depth camera (for example, the TMU 202). The server 301 is configured to execute computer instructions to perform operations associated with the systems and methods as described in this document, for example, to implement the example operations represented by the block diagrams or flowcharts of the drawings accompanying this description. The server 301 may implement enterprise service software which may include, for example, RESTful (Representational State Transfer) API services, message queuing services, and event services which can be provided by various platforms or specifications, such as the J2EE specification implemented by any of the Oracle WebLogic Server platform, the JBoss platform, or the IBM WebSphere platform, etc. Other technologies or platforms, such as Ruby on Rails, Microsoft .NET, or similar, can also be used.

In addition, the TMU 202 may further include a network interface to allow communication with other devices (such as the server 301 in Figure 3 as described in this document). The network interface of the TMU 202 may include any suitable type of communication interface(s) (for example, wired and/or wireless interfaces) configured to operate according to any suitable protocol(s), for example, Ethernet for wired communications and/or IEEE 802.11 for wireless communications.

As described below, the server 301 can be specifically configured to perform the operations represented by the block diagrams or flow diagrams of the drawings described in this document. The example server 301 in FIG. 3 comprises a processor 302, such as, for example, one or more microprocessors, controllers, and/or any suitable type of processor. The example server 301 in FIG. 3 further comprises a memory (for example, a volatile memory or a non-volatile memory) 304 accessible by the processor 302, for example, via a memory controller (not shown). The example processor 302 interacts with the memory 304 to obtain, for example, machine-readable instructions stored in the memory 304 corresponding, for example, to the operations represented by the flowcharts of this description.
In addition or as a variant, machine-readable instructions corresponding to the example operations of the diagrams or flowcharts can be stored on one or more removable media (for example, a compact disc, a digital versatile disc, removable flash memory, etc.), or accessed over a remote connection, such as the Internet or a cloud-based connection, which can be coupled to the server 301 to provide access to the machine-readable instructions stored thereon.

The example server 301 in FIG. 3 can also include a network interface 306 to allow communication with other machines via, for example, one or more computer networks, such as a local area network (LAN) or a wide area network (WAN), for example, the Internet. The example network interface 306 may include any suitable type of communication interface(s) (for example, wired and/or wireless interfaces) configured to operate according to any suitable protocols, for example, Ethernet for wired communications and/or IEEE 802.11 for wireless communications.

The example server 301 in FIG. 3 comprises input/output (I/O) interfaces 308 to allow reception of user input and communication of output data to the user, which can include, for example, any number of keyboards, mice, USB drives, optical drives, displays, touch screens, etc.

FIG. 4 shows a set of images 400, comprising a photorealistic image 402 of a commercial trailer, and an image 404 of the same trailer showing 3D image data as captured by a 3D depth camera (for example, such as the 3D depth camera 254 of the TMU 202). The photorealistic image 402 shows multiple objects 406 that are in the trailer, such as boxes, a step stool, and a worker loading the trailer. As the commercial trailer is loaded and filled, more and more objects will fill the space in the trailer. Image 404 shows what the same trailer might look like through the "eyes" of a 3D depth camera (for example, such as the 3D depth camera 254 of the TMU 202). Image 404 can be represented by a point cloud data set, or by a 3D image where each point represents an x, y and z coordinate in space.

To detect wall spaces for the purpose of calculating the density of package walls in the methods and systems presented in this document, many items that stand out in the 3D image data can be removed for calculation purposes. For example, a step stool 414 is shown in image 404. The step stool is not relevant for calculating the density of package walls, so the data points associated with the step stool 416 can be removed. A package wall can be identified, and then the package wall can be divided for a more precise evaluation of the fill level of the trailer. For example, the package wall can be divided into four equal regions 408, 410, 412 and 414. The bottom region 414 can be disregarded, because it is the most crowded with objects (such as the ladder) and with staged boxes (boxes resting on the floor waiting to be placed in the wall). In other embodiments, the bottom region 414 can be included in the package wall density calculations. Dividing the wall into equal regions provides detailed statistics for each region of the wall, and can make it easier to perform density analysis. Any of the systems and methods discussed in this document may be able to analyze and determine the density of package walls from image 404.
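A minimal sketch of such a region split, assuming height is stored in the second column of the point array; the four-region default mirrors the regions 408 to 414 of Figure 4, and all names are illustrative, not the patented implementation.

```python
import numpy as np

def split_into_regions(points, n_regions=4, drop_bottom=True):
    """points: (N, 3) array with column 1 = height; returns a list of regions."""
    y = points[:, 1]
    edges = np.linspace(y.min(), y.max(), n_regions + 1)
    regions = [points[(y >= edges[i]) & (y < edges[i + 1])]
               for i in range(n_regions)]
    regions[-1] = points[y >= edges[-2]]      # keep points on the top edge
    # Region 0 sits at floor level, crowded with staged boxes and tools,
    # so it can optionally be excluded from the density calculation.
    return regions[1:] if drop_bottom else regions
```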
FIG. 5 is a flow diagram of a method 500 for calculating package wall density for use in the loading of a commercial trailer. Method 500 begins at block 502, where a 3D depth camera (for example, such as the 3D depth camera 254 of the TMU 202) captures 3D image data of a vehicle storage area (for example, such as the vehicle storage area 102s). The 3D depth camera is oriented in one direction to capture the 3D image data of the vehicle storage area as described in this document. In various embodiments, the 3D image data can be 3D point cloud data. This point cloud data can be represented in a variety of formats, including the polygon file format (ply) or the point cloud library (pcd) format. In additional embodiments, the 3D image data can be captured periodically, for example, every 30 seconds, every minute, or every two minutes, etc., but can be captured at any frequency allowed by the associated 3D depth camera, for example, as allowed by the TMU 202.

In block 504, the 3D image data captured by the 3D depth camera is received by a loading efficiency application executing on one or more processors. The 3D image data can be received by the loading efficiency application as a point cloud data set, such as the types of point cloud data described above. In some embodiments, said one or more processors may be processors of the TMU 202, as described in this document. In certain embodiments, the 3D depth camera and said one or more processors can be housed in a mountable device, such as the TMU 202 shown in FIGS. 2A and 2B. In other embodiments, said one or more processors may be processors (for example, processor 302) of the server 301 as described in this document.

Based on the 3D image data, the loading efficiency application determines a loading efficiency score for the storage area. The loading efficiency score can be a way of measuring how efficiently a commercial trailer is filled. The more efficiently a trailer is filled, the more profitable it can be to use the trailer, and the more material can be shipped in a trailer. In some embodiments, the loading efficiency score for the storage area can be a metric representative of a ratio between the filled space in the storage area and the available space in the storage area up to a wall in the storage area. The available space can be calculated by the loading efficiency application as the amount of space filled in the trailer minus the amount of unused space in the trailer. The filled space can be the space that already contains objects, such as packages that make up a package wall, or workers loading the trailer.

As part of determining the loading efficiency score, the loading efficiency application analyzes the 3D image data via one or more processors in block 506. The loading efficiency application can remove some of the data points in the point cloud data set that are not relevant for determining the loading efficiency score. For example, data points that correspond to the boundaries of the commercial trailer (for example, floor, ceiling, walls) may not be necessary for the loading efficiency score calculation. As such, the loading efficiency application can remove all of the data points in the point cloud data set that correspond to these boundaries. One way in which the loading efficiency application can determine the data points to be removed is to remove the data points that are on or outside a plane that defines at least one boundary of the storage area. These planes can be predefined according to the known dimensions of the commercial trailer, so the loading efficiency application can know the boundaries in advance.
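A minimal filtering sketch under assumed trailer dimensions; the envelope values and the margin tolerance below are hypothetical, not taken from the patent.

```python
import numpy as np

def remove_boundary_points(points, width=2.4, height=2.6, margin=0.03):
    """Keep only points strictly inside a known trailer envelope.

    points: (N, 3) array of (x, y, z); x in [0, width], y in [0, height].
    margin: tolerance so points lying on a boundary plane are also removed.
    """
    x, y = points[:, 0], points[:, 1]
    inside = ((x > margin) & (x < width - margin) &
              (y > margin) & (y < height - margin))
    return points[inside]
```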
In other embodiments, the plane that defines at least one boundary can be determined dynamically by the loading efficiency application while it performs the method. Similarly, in some embodiments, when analyzing the point cloud data set, the loading efficiency application can divide the point cloud data set into a set of regions. The regions can be equal regions, such as those discussed above in connection with Figure 4, they can be unequal regions, or, in some cases, they can be non-overlapping regions. In some embodiments, each non-overlapping region may be composed of a set of data slices. After dividing the point cloud data set into regions, one or more regions can be removed from the set of regions. This can be done to speed up the processing of the wall density calculation, or for other reasons. When each non-overlapping region is made up of a set of data slices, the corresponding data slice ratios can be aggregated into an overall loading efficiency score per region. These per-region scores can then be aggregated with each other to calculate the loading efficiency score for the entire trailer, or any part of it.

In block 508, a set of data slices is generated by the loading efficiency application running on one or more processors. The set of data slices can be based on the point cloud data set, and each generated data slice can correspond to a portion of the 3D image data. In some embodiments, the loading efficiency application can divide the point cloud data set corresponding to a vertical wall plane into a set of horizontal slices. The vertical wall plane may correspond to a package wall which is constructed while the trailer is loaded. Each horizontal slice may correspond to a region, such as the regions discussed above, to a certain percentage of a specific section of the trailer, such as a package wall, or to combinations thereof. In particular, each horizontal slice can, for example, correspond to 1%, 2%, 5%, or another percentage of the floor-to-ceiling extent of a package wall as seen through the lens of the 3D depth camera.

In block 510, a set of missing data points in each data slice of the set of data slices is estimated by the loading efficiency application executing on one or more processors. These missing data points can be used as part of the process of determining the fill efficiency of a package wall. In some embodiments, estimating missing points in the data slices may include scanning each data slice for spaces in a data arrangement, and calculating an approximate number of missing points for each data slice. The data arrangement can be the way in which each data slice is organized by the loading efficiency application. For example, each data slice can include points along the x, y or z axis, and can be scanned along an x axis first, a y axis first, a z axis first, or combinations thereof.
Consequently, a space in the data arrangement can correspond to the loading efficiency application scanning the data slice along the x axis and not finding data at points along the x axis. For example, the loading efficiency application can start at the origin and read data points along the x axis up to point two, and then record that no data points reappear until point six. The loading efficiency application can then determine that a space exists along the x axis from point two to point six. The loading efficiency application can then scan the entire data slice and calculate the approximate number of missing points for that data slice. This process can be repeated until a set of missing data points is estimated for all data slices. In some embodiments, the loading efficiency application can detect that a pixel has data, but that the "intensity" of the data at that pixel is below a certain threshold, and therefore that the pixel is considered empty for space analysis purposes.

In block 512, a loading efficiency score can be calculated by the loading efficiency application running on one or more processors. The loading efficiency score can be based on the generated set of data slices and the estimated set of missing data points. In some embodiments, the loading efficiency application can calculate the loading efficiency score by dividing the point cloud data set into a set of non-overlapping regions, in which each non-overlapping region is made up of a set of data slices, and calculating a composite ratio for each region in which the corresponding data slice ratios are aggregated into an overall loading efficiency score per region. In another embodiment, the loading efficiency application can calculate the loading efficiency score by calculating a ratio between the number of points that are part of a wall and the sum of the number of points behind the wall and an approximate number of missing points.
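To make the gap scan and the block-512 ratio concrete, here is a hedged Python sketch: it walks one slice along the x axis in fixed steps, treats cells with no return (or only low-intensity returns) as missing points, and then forms the ratio described above. The step size, intensity threshold, and all names are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def missing_points_in_slice(slice_pts, intensities, wall_width=2.4,
                            step=0.02, min_intensity=0.1):
    """slice_pts: (N, 3) points in one horizontal slice; returns gap count."""
    n_cells = int(wall_width / step)
    occupied = np.zeros(n_cells, dtype=bool)
    for p, inten in zip(slice_pts, intensities):
        cell = int(p[0] / step)               # x position -> scan cell
        if 0 <= cell < n_cells and inten >= min_intensity:
            occupied[cell] = True             # low-intensity pixels stay "empty"
    return int(np.count_nonzero(~occupied))   # cells with no usable return

def efficiency_score(wall_points, behind_points, missing):
    # The block-512 ratio from the text: wall points over the sum of
    # points behind the wall and the approximate number of missing points.
    return len(wall_points) / (len(behind_points) + missing)
```

Runs of consecutive empty cells in `occupied` correspond to the point-two-to-point-six spaces described above; summing `missing_points_in_slice` over all slices yields the estimate used by `efficiency_score`.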
The terms "includes", "comprising", "a", "having", "includes," including "," contains "," containing "or any other variant thereof, are intended to cover a non-exclusive inclusion, so that a process, process, article, or apparatus that includes, has, includes, contains a list of items does not include only those items, but may include other items not expressly listed or inherent in that process, process, article, or apparatus. An element preceded by "includes ... a", "a ... a", "includes ... a", "contains ... a" does not exclude, without further constraints, the existence of additional identical elements in the process, process, article, or apparatus that includes, a, includes, contains the element. The term "one" is defined as one or more, unless otherwise specified in this document. The terms "substantially", "essentially", "approximately", "approximately" or any other version thereof, are defined as "close to" as understood by a person skilled in the art, and, in a mode of non-limiting embodiment, the term is defined to be within the limits of 10%, in another embodiment, within the limits of 5%, in another embodiment, within the limits of 1% and in another mode of achievement, within the limits of 0.5%. The term "coupled" as used in this document is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure which is "configured" in a certain way is configured at least in this way, but can also be configured in ways which are not listed. It will be appreciated that certain embodiments may be composed of one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, custom processors and door networks Field Programmable (FPGA) and unique stored program instructions (including both software and hardware) that control said one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the process and / or apparatus functions described in this document. Alternatively, some or all of the functions could be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function or certain combinations of some of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used. In addition, an embodiment can be implemented as a storage medium that can be read by a computer on which a code that can be read by a computer is stored to program a computer (for example, comprising a processor ) to perform a process as described and claimed in this document. Examples of these computer-readable storage media include, but are not limited to, a hard disk, CD-ROM, optical storage device, magnetic storage device, ROM (read only memory), PROM (programmable read-only memory), EPROM (erasable and programmable read-only memory), EEPROM (electrically erasable and programmable read-only memory) and flash memory. In addition, it is expected that one skilled in the art, however with possible significant effort and many design choices motivated, for example, by time available, current technology, and economic considerations, when guided by the concepts and principles presented in this document, will be easily able to generate these software instructions and programs and integrated circuits with minimal experimentation. 
The abstract of the invention is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing detailed description, it can be seen that various features are grouped together in various embodiments in order to streamline the description. This method of description should not be interpreted as reflecting an intention that the claimed embodiments require more features than those expressly set out in each claim. Rather, as the following claims reflect, the inventive subject matter lies in fewer than all of the features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.

Claims

[Claim 1] Three-dimensional (3D) depth imaging system for use in commercial trailer loading, the 3D depth imaging system comprising: * a 3D depth camera configured to capture 3D image data, the 3D depth camera being oriented in one direction to capture 3D image data of a storage area associated with a vehicle; and * a loading efficiency application (app) running on one or more processors, the loading efficiency application being configured to determine, on the basis of the 3D image data, a loading efficiency score for the storage area, * in which the determination of the loading efficiency score causes the loading efficiency application to receive a point cloud data set based on the 3D image data, to analyze the point cloud data set, to generate a set of data slices based on the point cloud data set, each data slice corresponding to a part of the 3D image data, to estimate a set of missing data points in each data slice in the set of data slices, and to calculate a loading efficiency score based on the generated set of data slices and the estimated set of missing data points.

[Claim 2] A 3D depth imaging system according to claim 1, wherein the loading efficiency score for the storage area is a metric representative of a ratio between the space filled in the storage area and the space available in the storage area up to a wall in the storage area.

[Claim 3] A 3D depth imaging system according to claim 2, wherein the available space is calculated as the amount of space filled in the storage area minus an amount of unused space in the storage area.

[Claim 4] A 3D depth imaging system according to claim 1, wherein, to analyze the point cloud data set, the loading efficiency application is further configured to: * divide the point cloud data set into a set of non-overlapping regions, in which each non-overlapping region is composed of a set of data slices; and * remove a region from the set of non-overlapping regions.

[Claim 5] A 3D depth imaging system according to claim 4, wherein, for each region, the corresponding data slice ratios are aggregated into an overall loading efficiency score per region.

[Claim 6] A 3D depth imaging system according to claim 1, wherein, to analyze the point cloud data set, the loading efficiency application is further configured to: * remove a set of data points that are outside of a plane that defines at least one boundary of the storage area.
[Claim 7] A 3D depth imaging system according to claim 1, wherein, to generate the set of data slices, the loading efficiency application is further configured to: * divide the point cloud data set into a set of horizontal planes, where each data slice corresponds to a horizontal plane in the set.

[Claim 8] A 3D depth imaging system according to claim 1, wherein, to estimate a set of missing data points in each data slice, the loading efficiency application is further configured to: * scan each data slice for spaces in a data arrangement; and * calculate an approximate number of missing points for each data slice.

[Claim 9] A 3D depth imaging system according to claim 1, wherein, to calculate a loading efficiency score, the loading efficiency application is further configured to: * divide the point cloud data set into a set of non-overlapping regions, in which each non-overlapping region is composed of a set of data slices; and * calculate a composite ratio for each region in which the corresponding data slice ratios are aggregated into an overall loading efficiency score per region.

[Claim 10] A 3D depth imaging system according to claim 1, wherein, to calculate a loading efficiency score, the loading efficiency application is further configured to: * calculate a ratio between the number of points that are part of a wall and the number of points behind the wall plus an approximate number of missing points.

[Claim 11] A 3D depth imaging system according to claim 1, wherein, to estimate a set of missing data points in each data slice in the set of data slices, the loading efficiency application is further configured to: * detect when there is no depth value for an expected data point.

[Claim 12] A method implemented by a computer for use in loading a commercial trailer, the method comprising: * receiving, at one or more processors, a point cloud data set based on 3D image data; * analyzing, at said one or more processors, the point cloud data set; * generating, at said one or more processors, a set of data slices on the basis of the point cloud data set, each data slice corresponding to a part of the 3D image data; * estimating, at said one or more processors, a set of missing data points in each data slice in the set of data slices; and * calculating, at said one or more processors, a loading efficiency score on the basis of the generated set of data slices and the estimated set of missing data points.

[Claim 13] A computer implemented method according to claim 12, wherein the analysis of the point cloud data set further comprises: * dividing, at said one or more processors, the point cloud data set into a set of non-overlapping regions, wherein each non-overlapping region is composed of a set of data slices; and * removing, at said one or more processors, a region from the set of non-overlapping regions.

[Claim 14] A computer implemented method according to claim 13, wherein, for each region, the corresponding data slice ratios are aggregated into an overall loading efficiency score per region.

[Claim 15] A computer implemented method according to claim 12, wherein the analysis of the point cloud data set further comprises: * removing, at said one or more processors, a set of data points that are outside of a plane that defines at least one boundary of the storage area.
[Claim 16] A method implemented by a computer according to claim 12, wherein the generation of the set of data slices further comprises: * dividing, at said one or more processors, the point cloud data set into a set of horizontal planes, in which each data slice corresponds to a horizontal plane in the set.

[Claim 17] A computer implemented method according to claim 12, wherein the estimation of a set of missing data points in each data slice further comprises: * scanning, at said one or more processors, each data slice for spaces in a data arrangement; and * calculating, at said one or more processors, an approximate number of missing points for each data slice.

[Claim 18] A method implemented by a computer according to claim 12, wherein the calculation of a loading efficiency score further comprises: * dividing, at said one or more processors, the point cloud data set into a set of non-overlapping regions, in which each non-overlapping region is composed of a set of data slices; and * calculating, at said one or more processors, a composite ratio for each region in which the corresponding data slice ratios are aggregated into an overall loading efficiency score per region.

[Claim 19] A computer implemented method according to claim 12, wherein, to calculate a loading efficiency score, the loading efficiency application is further configured to: * calculate a ratio between the number of points that are part of a wall and the sum of the number of points behind the wall and an approximate number of missing points.

[Claim 20] A method implemented by a computer according to claim 12, wherein the estimation of a set of missing data points in each data slice further comprises: * detecting, at said one or more processors, the absence of a depth value for an expected data point.
Patent family:

Publication number | Publication date
US20190197455A1 | 2019-06-27
DE112018005976T5 | 2020-08-06
US10628772B2 | 2020-04-21
WO2019125614A1 | 2019-06-27
AU2018391957A1 | 2020-03-26
GB2583245A | 2020-10-21
GB202009191D0 | 2020-07-29
AU2018391957B2 | 2021-07-01
CN111492404A | 2020-08-04
FR3076643B1 | 2022-02-25
Cited documents:

Publication number | Filing date | Publication date | Applicant | Title
US7071935B1 | 1999-06-14 | 2006-07-04 | Sun Microsystems, Inc. | Graphics system with just-in-time decompression of compressed graphics data
US20040066500A1 | 2002-10-02 | 2004-04-08 | Gokturk Salih Burak | Occupancy detection and measurement system and method
JP3833600B2 | 2002-10-08 | 2006-10-11 | 三菱电机株式会社 | Vehicle AC generator failure determination device
US8296101B1 | 2009-02-12 | 2012-10-23 | United Parcel Service Of America, Inc. | Systems and methods for evaluating environmental aspects of shipping systems
US20140372183A1 | 2013-06-17 | 2014-12-18 | Motorola Solutions, Inc | Trailer loading assessment and training
US9734595B2 | 2014-09-24 | 2017-08-15 | University of Maribor | Method and apparatus for near-lossless compression and decompression of 3D meshes and point clouds
US9752864B2 | 2014-10-21 | 2017-09-05 | Hand Held Products, Inc. | Handheld dimensioning system with feedback
DE112016005287T5 | 2015-11-18 | 2018-08-02 | Symbol Technologies, Llc | Methods and systems for tank assessment

Citing documents:

US10762331B1 | 2019-10-11 | 2020-09-01 | Zebra Technologies Corporation | Three-dimensional (3D) depth and two-dimensional (2D) imaging systems and methods for automatic container door status recognition
US20210304414A1 | 2020-03-24 | 2021-09-30 | Zebra Technologies Corporation | Methods for unit load device (ULD) door tarp detection
CN111832987B | 2020-06-23 | 2021-04-02 | 江苏臻云技术有限公司 | Big data processing platform and method based on three-dimensional content
Legal status:

2019-11-20 | PLFP | Fee payment | Year of fee payment: 2
2020-11-20 | PLFP | Fee payment | Year of fee payment: 3
2021-06-11 | PLSC | Publication of the preliminary search report | Effective date: 20210611
2021-11-18 | PLFP | Fee payment | Year of fee payment: 4
Priority:

Application number | Filing date | Title
US15/853,262 | 2017-12-22 | Computing package wall density in commercial trailer loading (published as US10628772B2)